human computation
Improving AI-generated music with user-guided training
Singh, Vishwa Mohan, Aryasomayajula, Sai Anirudh, Chatterjee, Ahan, Aydemir, Beste, Amin, Rifat Mehreen
AI music generation has advanced rapidly, with diffusion and autoregressive models enabling high-fidelity outputs. These tools can alter styles, mix instruments, or isolate them. Since sound can be visualized as spectrograms, image-generation algorithms can be applied to generate novel music. However, these algorithms are typically trained on fixed datasets, which makes it challenging for them to interpret and respond to user input accurately. This is especially problematic because music is highly subjective and requires a level of personalization that image generation does not provide. In this work, we propose a human-computation approach to gradually improve the performance of these algorithms based on user interactions. The human-computation element involves aggregating and selecting user ratings to use as the loss function for fine-tuning the model. We employ a genetic algorithm that incorporates user feedback to enhance the baseline performance of a model initially trained on a fixed dataset. The effectiveness of this approach is measured by the average increase in user ratings with each iteration. In the pilot test, the first iteration showed an average rating increase of 0.2 compared to the baseline. The second iteration further improved upon this, achieving an additional increase of 0.39 over the first iteration.
- Media > Music (0.93)
- Leisure & Entertainment (0.93)
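The abstract above describes a genetic algorithm whose fitness signal comes from aggregated user ratings. A minimal sketch of that selection loop, assuming hypothetical `rate`, `mutate`, and `crossover` callables standing in for the rating aggregation and the model-specific operators (the paper's actual fine-tuning setup is not specified here):

```python
import random

def evolve(population, rate, mutate, crossover, generations=5, keep=0.5):
    """One generation-by-generation loop in the spirit of the paper:
    user ratings act as the fitness function that drives selection."""
    for _ in range(generations):
        # Score every candidate by its (aggregated) user rating.
        scored = sorted(population, key=rate, reverse=True)
        # Keep the top-rated fraction as survivors.
        survivors = scored[: max(2, int(len(scored) * keep))]
        # Refill the population by recombining and mutating survivors.
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        population = survivors + children
    return population
```

Because survivors always include the current best candidate, the top rating is monotone non-decreasing across generations, which matches the paper's evaluation metric of rating improvement per iteration.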
LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Wu, Tongshuang, Zhu, Haiyi, Albayrak, Maya, Axon, Alexis, Bertsch, Amanda, Deng, Wenxing, Ding, Ziqi, Guo, Bill, Gururaja, Sireesh, Kuo, Tzu-Sheng, Liang, Jenny T., Liu, Ryan, Mandal, Ihita, Milbauer, Jeremiah, Ni, Xiaolin, Padmanabhan, Namrata, Ramkumar, Subhashini, Sudjianto, Alexis, Taylor, Jordan, Tseng, Ying-Jui, Vaidos, Patricia, Wu, Zhijin, Wu, Wei, Yang, Chenyang
LLMs have shown promise in replicating human-like behavior in crowdsourcing tasks that were previously thought to be exclusive to human abilities. However, current efforts focus mainly on simple atomic tasks. We explore whether LLMs can replicate more complex crowdsourcing pipelines. We find that modern LLMs can simulate some of crowdworkers' abilities in these "human computation algorithms," but the level of success is variable and influenced by requesters' understanding of LLM capabilities, the specific skills required for sub-tasks, and the optimal interaction modality for performing these sub-tasks. We reflect on human and LLMs' different sensitivities to instructions, stress the importance of enabling human-facing safeguards for LLMs, and discuss the potential of training humans and LLMs with complementary skill sets. Crucially, we show that replicating crowdsourcing pipelines offers a valuable platform to investigate (1) the relative strengths of LLMs on different tasks (by cross-comparing their performances on sub-tasks) and (2) LLMs' potential in complex tasks, where they can complete part of the tasks while leaving others to humans.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- South America > Brazil > Rio de Janeiro > South Atlantic Ocean (0.04)
- (9 more...)
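The entry above examines LLMs completing sub-tasks inside multi-stage crowdsourcing pipelines. A hypothetical skeleton of such a pipeline, where each named stage maps to a worker callable; the Find-Fix-Verify-style stage names and the pure-function workers are illustrative stand-ins, since in the paper's setting each callable would wrap a crowd task or an LLM prompt:

```python
def run_pipeline(stages, payload, workers):
    """Run a human-computation pipeline stage by stage. `workers` maps a
    stage name to a callable; swapping one callable is the only change
    needed to hand that sub-task to an LLM instead of a crowdworker."""
    for stage in stages:
        payload = workers[stage](payload)
    return payload

# Toy text-shortening run with stand-in workers (pure functions here;
# each could instead issue a crowd task or an LLM call).
workers = {
    "find": lambda text: {"text": text, "flagged": "very very"},
    "fix": lambda d: dict(d, text=d["text"].replace(d["flagged"], "very")),
    "verify": lambda d: d["text"],
}
shortened = run_pipeline(["find", "fix", "verify"],
                         "a very very long sentence", workers)
```

This routing-by-name design mirrors the paper's cross-comparison setup: the same pipeline can be rerun with different worker assignments to compare human and LLM performance per sub-task.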
Trustworthy Human Computation: A Survey
Kashima, Hisashi, Oyama, Satoshi, Arai, Hiromi, Mori, Junichiro
Human computation is an approach to solving problems that prove difficult using AI only, and involves the cooperation of many humans. Because human computation requires close engagement with both "human populations as users" and "human populations as driving forces," establishing mutual trust between AI and humans is an important issue to further the development of human computation. This survey lays the groundwork for the realization of trustworthy human computation. First, the trustworthiness of human computation as computing systems, that is, trust offered by humans to AI, is examined using the RAS (Reliability, Availability, and Serviceability) analogy, which defines measures of trustworthiness in conventional computer systems. Next, the social trustworthiness provided by human computation systems to users or participants is discussed from the perspective of AI ethics, including fairness, privacy, and transparency. Then, we consider human--AI collaboration based on two-way trust, in which humans and AI build mutual trust and accomplish difficult tasks through reciprocal collaboration. Finally, future challenges and research directions for realizing trustworthy human computation are discussed.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- North America > United States > Hawaii (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Leisure & Entertainment > Games (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)
Lan
Hybrid human-machine query processing systems, such as crowd-powered database systems, aim to broaden the scope of questions users can ask about their data by incorporating human computation to support queries that may be subjective and/or require visual or semantic interpretation. A common type of query involves filtering data by several criteria, some of which need human computation to be evaluated. For example, filtering a set of hotels for those that both (1) have great views from the rooms, and (2) have a fitness center. Criteria can differ in the amount of human effort required to decide if data satisfy them, due to each criterion's subjectivity and difficulty. There is potential to reduce crowdsourcing costs by ordering the evaluation of each of the criteria such that criteria needing more human computation are not processed for data that have not satisfied the less costly criteria. Unfortunately, for queries specified on-the-fly, the information about subjectivity and difficulty is unknown a priori. To overcome this challenge, we present Dynamic Filter, an adaptive query processing algorithm that dynamically changes the order in which criteria are evaluated based on observations while the query is running. Using crowdsourced data from a popular crowdsourcing platform, we show that Dynamic Filter can effectively adapt the processing order and approach the performance of a "clairvoyant" algorithm.
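The core idea above is classic adaptive predicate reordering. A minimal sketch, not the paper's actual Dynamic Filter algorithm: track each criterion's observed cost and pass rate while the query runs, and evaluate criteria in ascending cost-per-rejection order so that expensive (e.g. crowd-powered) checks run on fewer items. The class name, the per-criterion `cost` values, and the `cost / (1 - pass_rate)` ranking heuristic are assumptions:

```python
class AdaptiveFilter:
    """Reorder conjunctive filter criteria from runtime observations."""

    def __init__(self, criteria):
        # criteria: list of (name, predicate, cost) triples; cost stands
        # in for the human effort a crowd task for that criterion needs.
        self.criteria = criteria
        self.stats = {name: {"asked": 0, "passed": 0}
                      for name, _, _ in criteria}

    def _rank(self, name, cost):
        s = self.stats[name]
        # No observations yet: assume a 50% pass rate.
        pass_rate = s["passed"] / s["asked"] if s["asked"] else 0.5
        drop_rate = max(1.0 - pass_rate, 1e-9)
        # Cheaper, more selective criteria rank first, so they can
        # reject items before the costly criteria are ever consulted.
        return cost / drop_rate

    def matches(self, item):
        ordered = sorted(self.criteria,
                         key=lambda c: self._rank(c[0], c[2]))
        for name, pred, _ in ordered:
            s = self.stats[name]
            s["asked"] += 1
            ok = pred(item)
            s["passed"] += int(ok)
            if not ok:
                return False  # short-circuit: skip costlier criteria
        return True
```

On the hotels example from the abstract, the cheap fitness-center check would normally be evaluated first, and the subjective great-views question only for hotels that already have a gym.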
The Role of Human Computation in a Changing Technology Landscape: Experts Weigh In
Chris Welty from Alphabet, Kumar Chellapilla from Amazon, Besmira Nushi from Microsoft, Markus Krause from Brainworks, Olga Megorskaya from Toloka, and Lora Aroyo from Google Research shared their views. Chris explains that human computation usually takes two forms -- explicit and implicit. Implicit refers to situations when data labeling is a by-product, for example when we stream movies, listen to music on YouTube, or do web searches. The system's algorithm learns about its users even though no data labeling as such takes place. On the other hand, more "traditional" forms of data labeling like crowdsourcing are explicit.
In Search of Ambiguity: A Three-Stage Workflow Design to Clarify Annotation Guidelines for Crowd Workers
Pradhan, Vivek Krishna, Schaekermann, Mike, Lease, Matthew
While crowdsourcing now enables labeled data to be obtained more quickly, cheaply, and easily than ever before (Snow et al., 2008; Alonso, 2015; Sorokin and Forsyth, 2008), ensuring data quality remains something of an art, challenge, and perpetual risk. Consider a typical workflow for annotating data on Amazon Mechanical Turk (MTurk): a requester designs an annotation task, asks multiple workers to complete it, and then post-processes labels to induce final consensus labels. Because the annotation work itself is largely opaque, with only submitted labels being observable, the requester typically has little insight into what if any problems workers encounter during annotation. While statistical aggregation (Sheshadri and Lease, 2013; Hung et al., 2013; Zheng et al., 2017) and multi-pass iterative refinement (Little et al., 2010a; Goto et al., 2016) methods can be employed to further improve initial labels, there are limits to what can be achieved by post-hoc refinement following label collection. If initial labels are poor because many workers were confused by incomplete, unclear, or ambiguous task instructions, there is a significant risk of "garbage in equals garbage out" (Vidgen and Derczynski, 2020). In contrast, consider a more traditional annotation workflow involving trusted annotators, such as practiced by the Linguistic Data Consortium (LDC) (Griffitt and Strassel, 2016).
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Hawaii (0.04)
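The workflow entry above mentions statistical aggregation of redundant worker labels as the standard post-hoc quality step. As a point of reference, the simplest such aggregator is plain majority vote; the cited methods (e.g. Sheshadri and Lease, 2013) are more sophisticated model-based approaches, so this is only an illustrative baseline with an assumed input format of item-to-votes mappings:

```python
from collections import Counter

def majority_vote(labels):
    """Reduce each item's redundant worker labels to a consensus label,
    plus the fraction of workers who agreed (a rough quality signal)."""
    consensus = {}
    for item, votes in labels.items():
        label, count = Counter(votes).most_common(1)[0]
        consensus[item] = (label, count / len(votes))
    return consensus
```

Low agreement fractions are exactly the symptom the three-stage workflow targets: when many items show near-tied votes, the cause is often ambiguous guidelines rather than careless workers, and no amount of post-hoc aggregation can recover a clean label.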
Conceptualization and Framework of Hybrid Intelligence Systems
Prakash, Nikhil, Mathewson, Kory W.
As artificial intelligence (AI) systems become ubiquitous within our society, issues related to their fairness, accountability, and transparency are increasing rapidly. As a result, researchers are integrating humans with AI systems to build robust and reliable hybrid intelligence systems. However, this rapid growth is not underpinned by a proper conceptualization of these systems. This article provides a precise definition of hybrid intelligence systems and explains their relation to other similar concepts through our proposed framework and examples from contemporary literature. The framework breaks down the relationship between a human and a machine in terms of the degree of coupling and the directive authority of each party. Finally, we argue that all AI systems are hybrid intelligence systems, so human factors need to be examined at every stage of such systems' lifecycle.
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (0.73)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.31)
Human computation requires and enables a new approach to ethical review
Vepřek, Libuše Hannah, Seymour, Patricia, Michelucci, Pietro
With humans increasingly serving as computational elements in distributed information processing systems, and in consideration of the profit-driven motives and potential inequities that might accompany the emerging thinking economy [1], we recognize the need for establishing a set of related ethics to ensure the fair treatment and wellbeing of online cognitive laborers and the conscientious use of the capabilities to which they contribute. Toward this end, we first describe human-in-the-loop computing in the context of the new concerns it raises that are not addressed by traditional ethical research standards. We then describe shortcomings in the traditional approach to ethical review and introduce a dynamic approach for sustaining an ethical framework that can continue to evolve within the rapidly shifting context of disruptive new technologies.
- North America > United States > New York > Tompkins County > Ithaca (0.05)
- North America > United States > New York > New York County > New York City (0.05)
- Africa > Tanzania (0.05)
- (6 more...)
- Health & Medicine > Therapeutic Area (0.96)
- Education > Educational Setting > Online (0.47)
- Education > Educational Technology > Educational Software > Computer Based Training (0.46)
AAAI Conferences Calendar
This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that includes nonaffiliated conferences at www.aaai.org/Magazine/calendar.php. ICAIL-2019 will be held 17-21 June in the USA. SoCS-19 will be held July 16-17, 2019. AAAI Fall Symposium Series. The IAAI-20 Conference will be held February 9-11, 2020 at the Hilton New York Midtown Hotel in New York, New York, USA.
- North America > United States > New York > New York County > New York City (0.56)
- North America > United States > Virginia > Arlington County > Arlington (0.06)
- North America > United States > Florida > Sarasota County > Sarasota (0.06)
- (8 more...)